Search results for records where Creators/Authors contains: "Pyne, Edward"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site's.

  1. The study of space-bounded computation has drawn extensively from ideas and results in the field of communication complexity. Catalytic computation (Buhrman, Cleve, Koucký, Loff, and Speelman, STOC 2014) studies the power of bounded space augmented with a pre-filled hard drive that can be used non-destructively during the computation (the register trick behind this is sketched below). Presently, many structural questions in this model remain open. Towards a better understanding of catalytic space, we define a model of catalytic communication complexity and prove new upper and lower bounds. In our model, Alice and Bob share a blackboard with a tiny number of free bits and a larger section with an arbitrary initial configuration. They must jointly compute a function of their inputs, communicating only via the blackboard, and must always reset the blackboard to its initial configuration. We prove several upper and lower bounds:
     1) We characterize the simplest nontrivial model, that of one bit of free space and three rounds, in terms of 𝔽₂ rank. In particular, we give natural problems that are solvable with a minimal-sized blackboard yet require near-maximal (randomized) communication complexity, and vice versa.
     2) We show that allowing constantly many free bits, as opposed to one, allows an exponential improvement in the size of the blackboard for natural problems. To do so, we connect the problem to existence questions in extremal graph theory.
     3) We give tight connections between our model and standard notions of non-uniform catalytic computation. Using this connection, we show that with an arbitrary constant number of rounds and bits of free space, one can compute all functions in TC⁰.
     We view this model as a step toward understanding the value of filled space in computation.
    Free, publicly accessible full text available January 7, 2026
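A minimal runnable sketch of the non-destructive use of filled memory that catalytic computation builds on, in the spirit of the register programs of Buhrman et al.; this is our own illustration of the underlying trick, not the paper's communication protocol, and all names are ours:

```python
# Illustrative register trick (ours, not the paper's protocol): registers
# r1 and r2 hold arbitrary unknown values, standing in for the pre-filled
# memory. Reversible updates add x*y into r2 while restoring r1 exactly.
def transparent_product(x, y, r1, r2):
    r1 += x           # r1 = r1_0 + x
    r2 += r1 * y      # r2 = r2_0 + (r1_0 + x)*y
    r1 -= x           # r1 = r1_0, restored
    r2 -= r1 * y      # r2 = r2_0 + x*y
    return r1, r2

# Whatever the registers start with, r1 comes back intact and r2 changes
# by exactly x*y: the filled memory was used without destroying it.
r1, r2 = transparent_product(3, 5, r1=42, r2=-7)
assert (r1, r2) == (42, -7 + 3 * 5)
```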
  2. This paper revisits the study of two classical technical tools in theoretical computer science: Yao's transformation of distinguishers to next-bit predictors (FOCS 1982; see the sketch below), and the "reconstruction paradigm" in pseudorandomness (e.g., as in Nisan and Wigderson, JCSS 1994). Recent works of Pyne, Raz, and Zhan (FOCS 2023) and Doron, Pyne, and Tell (STOC 2024) showed that both of these tools can be derandomized in the specific context of read-once branching programs (ROBPs), but left open the question of derandomizing them in more general settings. Our main contributions give appealing evidence that derandomization of the two tools is possible in general settings, show surprisingly strong consequences of such derandomization, and reveal several new settings where such derandomization is unconditionally possible for algorithms stronger than ROBPs (with useful consequences). Specifically:
     • We show that derandomizing these tools is equivalent to general derandomization. Specifically, derandomizing distinguish-to-predict transformations is equivalent to prBPP = prP, and derandomizing reconstruction procedures (in a more general sense that we introduce) is equivalent to prBPP = prZPP. These statements hold even when scaled down to weak circuit classes and to algorithms that run in super-polynomial time.
     • Our main technical contributions are unconditional constructions of derandomized versions of Yao's transformation (or reductions of this task to other problems) for classes and algorithms beyond ROBPs. Consequently, we deduce new results: a significant relaxation of the hypotheses required to derandomize the isolation lemma for logspace algorithms and deduce NL = UL; and proofs that derandomization necessitates targeted PRGs in catalytic logspace (unconditionally) and in logspace (conditionally).
     In addition, we introduce a natural subclass of prZPP that has been implicitly studied in recent works (Korten, FOCS 2021 and CCC 2022): the class of problems reducible to a problem called "Lossy Code". We provide a structural characterization of this class in terms of derandomized reconstruction procedures, and show that this characterization is robust to several natural variations. Lastly, we present alternative proofs of classical results in the theory of pseudorandomness (such as two-sided derandomization reducing to one-sided), relying on the notion of deterministically transforming distinguishers to predictors as the main technical tool.
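Yao's transformation, in its standard randomized form, is short enough to state as code. The sketch below is our own illustration (function names ours): it pads a prefix with fresh random bits and uses the distinguisher's verdict to keep or flip a guessed bit, and it is exactly this use of extra randomness that the paper seeks to eliminate:

```python
import random

def next_bit_predictor(D, n, i, prefix):
    """Guess bit i of an n-bit string from its first i bits, using
    distinguisher D. Standard hybrid argument: pad with uniform bits,
    and trust the guess exactly when D answers "pseudorandom"."""
    b = random.randrange(2)
    suffix = [random.randrange(2) for _ in range(n - i - 1)]
    return b if D(prefix + [b] + suffix) == 1 else 1 - b

# Toy check: G(s) = s||s, and D accepts iff the two halves agree. For the
# final bit the random padding is empty, so the predictor is always right.
k = 4
D = lambda x: 1 if x[:k] == x[k:] else 0
s = [random.randrange(2) for _ in range(k)]
x = s + s
assert next_bit_predictor(D, 2 * k, 2 * k - 1, x[:-1]) == x[-1]
```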
  3. We introduce a novel family of expander-based error-correcting codes. These codes can be sampled with randomness linear in the block length, and achieve list-decoding capacity (among other local properties). Our expander-based codes can be built starting from any family of sufficiently low-bias codes, and as a consequence, we give the first construction of a family of algebraic codes that can be sampled with linear randomness and achieve list-decoding capacity. We achieve this by introducing the notion of a pseudorandom puncturing of a code, where we select n indices of a base code C ⊂ 𝔽_q^m in a correlated fashion. Concretely, whereas a random linear code (i.e., a truly random puncturing of the Hadamard code) requires O(n log m) random bits to sample, we sample a pseudorandom linear code with O(n + log m) random bits by instantiating our pseudorandom puncturing as a length-n random walk on an expander graph on [m] (see the sketch below). In particular, we extend a result of Guruswami and Mosheiff (FOCS 2022) and show that a pseudorandom puncturing of a small-bias code satisfies the same local properties as a random linear code with high probability. As a further application of our techniques, we also show that pseudorandom puncturings of Reed-Solomon codes are list-recoverable beyond the Johnson bound, extending a result of Lund and Potukuchi (RANDOM 2020). We do this by instead analyzing properties of codes with large distance, and show that pseudorandom puncturings still work well in this regime.
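A sketch of the sampling step, with a stand-in graph in place of a genuine expander family (all names ours): a length-n walk costs about log m bits for the start vertex plus log d bits per step, giving the O(n + log m) randomness budget for constant degree, versus O(n log m) for n independent indices:

```python
import random

def expander_walk_indices(graph, n):
    """Choose n coordinates of [m] via a random walk on `graph` (given as
    adjacency lists): log2(m) bits for the start, log2(d) bits per step."""
    v = random.randrange(len(graph))
    indices = [v]
    for _ in range(n - 1):
        v = graph[v][random.randrange(len(graph[v]))]
        indices.append(v)
    return indices

def puncture(codeword, indices):
    # The punctured code keeps only the selected coordinates.
    return [codeword[i] for i in indices]

# Stand-in 3-regular graph on m = 8 vertices (a cycle plus long chords);
# a real instantiation needs an explicit expander family.
m = 8
graph = [[(v - 1) % m, (v + 1) % m, (v + m // 2) % m] for v in range(m)]
print(puncture(list(range(m)), expander_walk_indices(graph, n=5)))
```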
  4. Let L be a language that can be decided in linear space and let ε > 0 be any constant. Let A be the exponential hardness assumption that, for every n, membership in L for inputs of length n cannot be decided by circuits of size smaller than 2^{εn}. We prove that for every function f : {0,1}* → {0,1} computable by a randomized logspace algorithm R, there exists a deterministic logspace algorithm D (attempting to compute f) such that, on every input x of length n, the algorithm D outputs one of the following:
     1) The correct value f(x).
     2) The string "I am unable to compute f(x) because the hardness assumption A is false", followed by a (provenly correct) circuit of size smaller than 2^{εn′} for membership in L for inputs of length n′, for some n′ = Θ(log n); that is, a circuit that refutes A (a toy version of this compute-or-refute contract is sketched below).
     Moreover, D is explicitly constructed, given R. We note that previous works on the hardness-versus-randomness paradigm give derandomized algorithms that rely blindly on the hardness assumption. If the hardness assumption is false, the algorithms may output incorrect values, and thus a user cannot trust that an output given by the algorithm is correct. Instead, our algorithm D verifies the computation, so that it never outputs an incorrect value. Thus, if D outputs a value for f(x), that value is certified to be correct; and if D does not output a value for f(x), it alerts that the hardness assumption was found to be false and refutes the assumption. Our next result is a universal derandomizer for BPL (the class of problems solvable by bounded-error randomized logspace algorithms): we give a deterministic algorithm U that takes as input a randomized logspace algorithm R and an input x, and simulates the computation of R on x deterministically. Under the widely believed assumption BPL = L, the space ...
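The compute-or-refute contract is easiest to see on a toy problem. The sketch below is not the paper's construction: it deterministically inverts a modulo p under the assumption "p is prime", verifies the answer rather than trusting the assumption, and on failure outputs a witness refuting the assumption (all names are ours):

```python
from math import gcd

def certified_inverse(a, p):
    """Toy analogue of the paper's contract (not its construction).
    Assumption A: "p is prime". Fermat's little theorem makes a^(p-2)
    an inverse of a mod p *if* A holds; we verify instead of trusting A."""
    inv = pow(a, p - 2, p)
    if (a * inv) % p == 1:
        return ("value", inv)              # certified-correct output
    # Verification failed, so A is false; gcd(a, p) is a witness exposing
    # the failure (here, a shared factor of a and p), playing the role of
    # the small refuting circuit in the paper's dichotomy.
    return ("assumption A is false", gcd(a, p))

print(certified_inverse(3, 7))   # ('value', 5): 3 * 5 = 15 = 1 (mod 7)
print(certified_inverse(3, 9))   # refutation: witness factor 3 divides 9
```

Just as in the abstract, an output labeled "value" is verified and hence trustworthy regardless of whether the assumption is true; the assumption is only ever relied on through a check that, when it fails, yields an explicit refutation.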
  5. We give new upper and lower bounds on the power of several restricted classes of arbitrary-order read-once branching programs (ROBPs) and standard-order ROBPs (SOBPs) that have received significant attention in the literature on pseudorandomness for space-bounded computation (the model is sketched below).
     - Regular SOBPs of length n and width ⌊w(n+1)/2⌋ can exactly simulate general SOBPs of length n and width w, and moreover an n/2 − o(n) blow-up in width is necessary for such a simulation. Our result extends and simplifies prior average-case simulations (Reingold, Trevisan, and Vadhan (STOC 2006); Bogdanov, Hoza, Prakriya, and Pyne (CCC 2022)), in particular implying that weighted pseudorandom generators (Braverman, Cohen, and Garg (SICOMP 2020)) for regular SOBPs of width poly(n) or larger automatically extend to general SOBPs. Furthermore, our simulation also extends to general (even read-many) oblivious branching programs.
     - There exist natural functions computable by regular SOBPs of constant width that are average-case hard for permutation SOBPs of exponential width. Indeed, we show that Inner-Product mod 2 is average-case hard for arbitrary-order permutation ROBPs of exponential width.
     - There exist functions computable by constant-width arbitrary-order permutation ROBPs that are worst-case hard for exponential-width SOBPs.
     - Read-twice permutation branching programs of subexponential width can simulate polynomial-width arbitrary-order ROBPs.
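For readers unfamiliar with the model, here is a minimal sketch (our own illustrative code, not from the paper) of a standard-order ROBP as layered transition tables, together with a width-2 permutation SOBP computing parity:

```python
# A standard-order ROBP of width w and length n: layer i maps
# (state, bit x_i) -> state. It is a *permutation* program if each fixed
# bit value induces a bijection on the states of every layer.
def evaluate_sobp(layers, start, accept, x):
    state = start
    for layer, bit in zip(layers, x):
        state = layer[state][bit]
    return state in accept

# Width-2 permutation (hence regular) SOBP for parity: reading a 1 swaps
# the two states, reading a 0 fixes them.
n = 5
parity_layer = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}
layers = [parity_layer] * n
x = [1, 0, 1, 1, 0]
assert evaluate_sobp(layers, 0, {1}, x) == (sum(x) % 2 == 1)
```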
  6. A spanner of a graph is a subgraph that preserves lengths of shortest paths up to a multiplicative distortion. For every k, a spanner with size O(n^{1+1/k}) and stretch (2k+1) can be constructed by a simple centralized greedy algorithm (sketched below), and this is tight assuming the Erdős girth conjecture. In this paper we study the problem of constructing spanners in a local manner, specifically in the Local Computation Model proposed by Rubinfeld et al. (ICS 2011).
     We provide a randomized Local Computation Algorithm (LCA) for constructing (2r-1)-spanners with Õ(n^{1+1/r}) edges and probe complexity Õ(n^{1-1/r}) for r ∈ {2,3}, where n denotes the number of vertices in the input graph. Up to polylogarithmic factors, in both cases the stretch factor is optimal (for the respective number of edges). In addition, our probe complexity for r = 2, i.e., for constructing a 3-spanner, is optimal up to polylogarithmic factors. Our result improves over the probe complexity of Parter et al. (ITCS 2019), which is Õ(n^{1-1/(2r)}) for r ∈ {2,3}. Both our algorithms and those of Parter et al. use a combination of neighbor probes and pair probes in the above-mentioned LCAs.
     For general k ≥ 1, we provide an LCA for constructing O(k²)-spanners with Õ(n^{1+1/k}) edges using O(n^{2/3}Δ²) neighbor probes, improving over the Õ(n^{2/3}Δ⁴) algorithm of Parter et al. By developing a new randomized LCA for graph decomposition, we further improve the probe complexity of the latter task to O(n^{2/3-(1.5-α)/k}Δ²), for any constant α > 0. This latter LCA may be of independent interest.
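For context, the centralized greedy baseline mentioned at the start of the abstract can be sketched in a few lines for unweighted graphs; this is the classical algorithm with our own illustrative naming, not the paper's LCA:

```python
from collections import deque

def greedy_spanner(n, edges, k):
    """Classical greedy (2k+1)-spanner for an unweighted graph on n
    vertices: keep an edge only if its endpoints are currently more than
    t = 2k+1 apart in the spanner built so far."""
    t = 2 * k + 1
    spanner = {v: set() for v in range(n)}

    def dist_at_most(u, v, limit):
        # BFS from u, truncated at depth `limit`.
        depth = {u: 0}
        queue = deque([u])
        while queue:
            w = queue.popleft()
            if w == v:
                return True
            if depth[w] == limit:
                continue
            for nb in spanner[w]:
                if nb not in depth:
                    depth[nb] = depth[w] + 1
                    queue.append(nb)
        return False

    kept = []
    for (u, v) in edges:
        if not dist_at_most(u, v, t):
            spanner[u].add(v)
            spanner[v].add(u)
            kept.append((u, v))
    return kept
```

Every skipped edge already has a path of length at most 2k+1 in the spanner, which gives the stretch bound, and every cycle in the output has length greater than t+1, which is what yields the O(n^{1+1/k}) size bound via girth arguments. The LCAs in the paper answer edge queries to such a sparse subgraph locally, without building it in full.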